58,978 research outputs found
Perceptual Scale Expansion: An Efficient Angular Coding Strategy For Locomotor Space
Whereas most sensory information is coded on a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for the angular variables important to precise motor control. In four experiments, we show that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and nonverbal measures (Experiments 1 and 2), as well as in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching, while allowing for accurate spatial action to be understood as the result of calibration
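A minimal numeric sketch of the proposed expansion, assuming a constant gain of 1.5 near horizontal (the abstract reports the factor but not an exact functional form), shows how an expanded declination scale would yield the shortened explicit distance reports described:

```python
import math

GAIN = 1.5  # scale-expansion factor reported for angles near horizontal

def perceived_declination(actual_deg: float) -> float:
    """Perceived gaze declination under a linear scale expansion
    (assumed constant-gain form; the abstract gives only the factor)."""
    return GAIN * actual_deg

def perceived_distance(eye_height_m: float, actual_declination_deg: float) -> float:
    """Distance to a ground-plane target implied by the *perceived*
    declination: d = h / tan(angle). Exaggerating the angle shortens
    the implied distance, matching explicit distance underestimation."""
    angle = math.radians(perceived_declination(actual_declination_deg))
    return eye_height_m / math.tan(angle)
```

For an eye height of 1.6 m and an actual declination of 10 degrees, the geometrically correct distance is about 9.1 m, while the expanded angle implies roughly 6.0 m, the kind of compression typically seen in direct report.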
Orbitofrontal Cortex Value Signals Depend on Fixation Location during Free Viewing
In the natural world, monkeys and humans judge the economic value of numerous competing stimuli by moving their gaze from one object to another, in a rapid series of eye movements. This suggests that the primate brain processes value serially, and that value-coding neurons may be modulated by changes in gaze. To test this hypothesis, we presented monkeys with value-associated visual cues and took the unusual step of allowing unrestricted free viewing while we recorded neurons in the orbitofrontal cortex (OFC). By leveraging natural gaze patterns, we found that a large proportion of OFC cells encode gaze location and that, in some cells, value coding is amplified when subjects fixate near the cue. These findings provide the first cellular-level mechanism for previously documented behavioral effects of gaze on valuation and suggest a major role for gaze in neural mechanisms of valuation and decision-making under ecologically realistic conditions
The interaction between gaze and facial expression in the amygdala and extended amygdala is modulated by anxiety
Behavioral evidence indicates that angry faces are seen as more threatening, and elicit greater anxiety, when directed at the observer, whereas the influence of gaze on the processing of fearful faces is less consistent. Recent research has also found inconsistent effects of expression and gaze direction on the amygdala response to facial signals of threat. However, such studies have failed to consider the important influence of anxiety on the response to signals of threat; an influence that is well established in behavioral research and recent neuroimaging studies. Here, we investigated the way in which individual differences in anxiety influence the interactive effect of gaze and expression on the response to angry and fearful faces in the human extended amygdala. Participants viewed images of fearful, angry and neutral faces, displaying either an averted or a direct gaze. We found that state anxiety predicted an increased response in the dorsal amygdala/substantia innominata (SI) to angry faces when gazing at, relative to away from, the observer. By contrast, high state-anxious individuals showed an increased amygdala response to fearful faces that was less dependent on gaze. In addition, the combined effect of state anxiety and gaze on emotional intensity ratings mirrored the relationship between anxiety and the amygdala/SI response. These results have implications for understanding the functional role of the amygdala and extended amygdala in processing signals of threat, and are consistent with the proposed role of this region in coding the relevance or significance of a stimulus to the observer
Fast Hands-free Writing by Gaze Direction
We describe a method for text entry based on inverse arithmetic coding that relies on gaze direction and is faster and more accurate than using an on-screen keyboard. These benefits derive from two innovations: the writing task is matched to the capabilities of the eye, and a language model is used to make predictable words and phrases easier to write.
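The coupling between a language model and gaze-driven selection can be illustrated with a toy sketch in the spirit of arithmetic coding (an illustration with hypothetical probabilities, not the authors' implementation): more probable characters receive larger on-screen targets, so predictable text requires less precise gaze control:

```python
from typing import Dict, List, Tuple

def partition(low: float, high: float,
              probs: Dict[str, float]) -> List[Tuple[str, float, float]]:
    """Split [low, high) into one subinterval per character, with width
    proportional to probability, as in arithmetic coding."""
    spans, cursor = [], low
    for ch, p in probs.items():
        width = (high - low) * p
        spans.append((ch, cursor, cursor + width))
        cursor += width
    return spans

def select(spans: List[Tuple[str, float, float]],
           gaze_y: float) -> Tuple[str, float, float]:
    """Return the subinterval containing the normalised gaze coordinate;
    writing proceeds by repeatedly zooming into the chosen span."""
    for ch, lo, hi in spans:
        if lo <= gaze_y < hi:
            return ch, lo, hi
    return spans[-1]

# Toy unigram model (hypothetical numbers): likely characters get big targets.
model = {"e": 0.6, "t": 0.3, "z": 0.1}
spans = partition(0.0, 1.0, model)
ch, lo, hi = select(spans, 0.4)  # a mid-screen gaze lands in the large 'e' span
```

Repeating the partition inside the selected span, with probabilities conditioned on the text so far, is what lets a stronger language model translate directly into faster writing.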
Gaze-dependent topography in human posterior parietal cortex.
The brain must convert retinal coordinates into those required for directing an effector. One prominent theory holds that, through a combination of visual and motor/proprioceptive information, head-/body-centered representations are computed within the posterior parietal cortex (PPC). An alternative theory, supported by recent visual and saccade functional magnetic resonance imaging (fMRI) topographic mapping studies, suggests that PPC neurons provide a retinal/eye-centered coordinate system, in which the coding of a visual stimulus location and/or intended saccade endpoints should remain unaffected by changes in gaze position. To distinguish between a retinal/eye-centered and a head-/body-centered coordinate system, we measured how gaze direction affected the representation of visual space in the parietal cortex using fMRI. Subjects performed memory-guided saccades from a central starting point to locations “around the clock.” Starting points varied between left, central, and right gaze relative to the head-/body midline. We found that memory-guided saccadotopic maps throughout the PPC reorganized spatially even with very subtle changes in starting gaze position, despite constant retinal input and eye movement metrics. Such a systematic shift is inconsistent with models arguing for a retinal/eye-centered coordinate system in the PPC, but it is consistent with head-/body-centered coordinate representations
Gaze fixation improves the stability of expert juggling
Novice and expert jugglers employ different visuomotor strategies: whereas novices look at the balls around their zeniths, experts tend to fixate their gaze at a central location within the pattern (a so-called gaze-through). A gaze-through strategy may reflect visuomotor parsimony, i.e., the use of simpler visuomotor (oculomotor and/or attentional) strategies as afforded by superior tossing accuracy and error corrections. In addition, the more stable gaze during a gaze-through strategy may result in more accurate movement planning by providing a stable base for gaze-centered neural coding of ball motion and movement plans, or for shifts in attention. To determine whether a stable gaze might indeed have such beneficial effects on juggling, we examined juggling variability during 3-ball cascade juggling with and without constrained gaze fixation (at various depths) in expert performers (n = 5). Novice jugglers were included (n = 5) for comparison, even though our predictions pertained specifically to expert juggling. We indeed observed that experts, but not novices, juggled with significantly less variability when fixating, compared to unconstrained viewing. Thus, while visuomotor parsimony might still contribute to the emergence of a gaze-through strategy, this study highlights an additional role for improved movement planning. This role may be engendered by gaze-centered coding and/or attentional control mechanisms in the brain
Coding of the Reach Vector in Parietal Area 5d
Competing models of sensorimotor computation predict different topological constraints in the brain. Some models propose population coding of particular reference frames in anatomically distinct nodes, whereas others require no such dedicated subpopulations and instead predict that regions will simultaneously code in multiple, intermediate, reference frames. Current empirical evidence is conflicting, partly due to difficulties involved in identifying underlying reference frames. Here, we independently varied the locations of hand, gaze, and target over many positions while recording from the dorsal aspect of parietal area 5. We find that the target is represented in a predominantly hand-centered reference frame here, contrasting with the relative code seen in dorsal premotor cortex and the mostly gaze-centered reference frame in the parietal reach region. This supports the hypothesis that different nodes of the sensorimotor circuit contain distinct and systematic representations, and this constrains the types of computational model that are neurobiologically relevant
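The hand-centered versus gaze-centered distinction can be made concrete with a schematic (illustrative only, using 1-D positions; not the recording analysis itself):

```python
def hand_centered(target: float, hand: float, gaze: float) -> float:
    """Area 5d-style code (per the abstract): target relative to the hand;
    gaze position is irrelevant to the coded value."""
    return target - hand

def gaze_centered(target: float, hand: float, gaze: float) -> float:
    """Parietal-reach-region-style code: target relative to gaze;
    hand position is irrelevant to the coded value."""
    return target - gaze

# Varying hand, gaze, and target independently, as in the experiment,
# dissociates the two codes: shifting gaze changes only the gaze-centered
# value, and shifting the hand changes only the hand-centered one.
```

This independence is why the study varied all three locations over many positions: a cell whose tuning shifts with the hand but not with gaze reveals a hand-centered reference frame.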
The eyes have it: Infant gaze as an indicator of hunger and satiation
Infant gaze serves as a measure of attention to food cues in adults and children and may play a role in signalling infant hunger and satiation. Maternal responsiveness to infant satiation cues, including gaze, supports healthy appetite development and may reduce obesity risk. However, mothers often experience difficulty in interpreting feeding cues, and there have been few attempts to study cues systematically. This study aimed to develop a reliable coding scheme for categorising and tracking infant gaze behaviours during complementary feeding (CF). Twenty infants aged between six and eighteen months were filmed during typical meals on two occasions at home. The Infant Gaze at Mealtime (IGM) coding scheme was devised from the analysis of a sample of videos, a piloting and testing process, and the feeding cues and developmental psychology literature. Inter- and intra-rater reliability tests of the scheme with 20% of the study videos revealed high levels of reliability. When applied to the full sample of 225 video clips, the IGM coding scheme revealed a significant decrease over time in the frequency of infants gazing at food and a significant increase in exploratory gaze behaviour within a meal. These changes were consistent across main and dessert courses, suggesting they may be indicative of changes in infant feeding state. The results suggest that infant gaze may offer a means of identifying infant hunger and satiation and, as an easily observed behaviour, an effective tool for mothers and professionals for promoting responsive feeding
Attention neglects a stare-in-the-crowd: Unanticipated consequences of prediction-error coding.
Direct gaze - someone looking at you - is an important and subjectively salient stimulus. Its processing is thought to be enhanced by the brain's internalised predictions - priors - that effectively specify it as the most likely gaze direction. Current consensus holds that, befitting its presumed importance, direct gaze attracts attention more powerfully than other gazes. Conversely, some Predictive Coding (PC) models, in which exogenous attention is drawn to stimuli that violate predictions, may be construed as making the opposite claim - i.e., exogenous attention should be biased away from direct gaze (which conforms to internal predictions) and toward averted gaze (which does not). Here, using search displays containing a salient, 'odd-one-out' gaze, we observed an attentional bias (in rapid, initial saccades) toward averted gaze, as PC models would predict. However, this pattern obtained only when conditions highlighted gaze uniqueness. We speculate that, in our experiments, task requirements determined how prediction influenced perception.